1.
To improve the segmentation accuracy of brain gliomas, a 3D convolutional neural network algorithm combined with an attention mechanism is proposed. Image patches at three different scales are fed into the network; after nine convolutional layers and one classification layer, three classification results are obtained. These results are multiplied by the weights learned by the attention module and summed voxel-wise to produce the output. In addition, the algorithm uses a loss function that mixes the Dice loss and the Focal loss with a weighting hyperparameter. Experiments show that the algorithm achieves Dice coefficients of 95.31%, 80.12%, and 82.25% on the whole tumor, tumor core, and enhancing tumor regions, respectively. Compared with deepmedic, an existing brain glioma segmentation algorithm, the Dice coefficients of the whole, core, and enhancing regions improve by 3%, 2%, and 6%, respectively. This is of real clinical significance for brain glioma segmentation.
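A minimal sketch of such a hybrid Dice and Focal loss is given below, assuming a PyTorch setting with one-hot targets; the mixing weight `lam` and the focusing parameter `gamma` are illustrative placeholders, not values taken from the paper.

```python
# Sketch (assumed, not the paper's code) of a Dice + Focal hybrid loss for 3D segmentation.
import torch
import torch.nn.functional as F

def soft_dice_loss(logits, targets, eps=1e-6):
    # logits, targets: (N, C, D, H, W); targets are one-hot encoded
    probs = torch.softmax(logits, dim=1)
    dims = (0, 2, 3, 4)
    intersection = (probs * targets).sum(dims)
    union = probs.sum(dims) + targets.sum(dims)
    dice = (2.0 * intersection + eps) / (union + eps)
    return 1.0 - dice.mean()

def focal_loss(logits, targets, gamma=2.0):
    log_probs = F.log_softmax(logits, dim=1)
    probs = log_probs.exp()
    ce = -(targets * log_probs)              # per-class cross entropy
    focal = ((1.0 - probs) ** gamma) * ce    # down-weight easy voxels
    return focal.sum(dim=1).mean()

def hybrid_loss(logits, targets, lam=0.5, gamma=2.0):
    # lam is a hypothetical mixing hyperparameter; the abstract does not
    # specify the exact weighting scheme.
    return lam * soft_dice_loss(logits, targets) + (1.0 - lam) * focal_loss(logits, targets, gamma)
```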
2.
To address the blurred boundaries that appear in depth maps when deep learning is used for depth estimation on street images, a semantic segmentation approach is proposed. Depth is estimated by generating disparity maps from left and right stereo views, which allows unsupervised training. A semantic segmentation layer is added to the network, and a structure of several parallel dilated convolutions enlarges the receptive field while reducing the number of downsampling operations, lowering the information loss caused by downsampling and making the results more accurate. This is also the first time dilated convolutions have been combined with depth estimation to improve accuracy. After training on the KITTI street dataset, and compared with existing results, the method not only improves detection accuracy and reduces the error rate, but also renders objects in the output more sharply and preserves details that the original model ignored, representing the original image more completely.
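The sketch below illustrates the general idea of parallel dilated (atrous) convolutions that enlarge the receptive field without extra downsampling; it is an assumed PyTorch example, and the dilation rates and channel sizes are illustrative rather than the paper's configuration.

```python
# Sketch (assumed) of a block of parallel dilated convolutions applied to a feature map.
import torch
import torch.nn as nn

class ParallelDilatedBlock(nn.Module):
    def __init__(self, in_ch, out_ch, rates=(1, 6, 12, 18)):
        super().__init__()
        # One 3x3 branch per dilation rate; padding=rate keeps the spatial size.
        self.branches = nn.ModuleList([
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=r, dilation=r)
            for r in rates
        ])
        self.project = nn.Conv2d(out_ch * len(rates), out_ch, kernel_size=1)

    def forward(self, x):
        feats = [branch(x) for branch in self.branches]   # same resolution, growing receptive field
        return self.project(torch.cat(feats, dim=1))

x = torch.randn(1, 64, 48, 160)        # a street-image-sized feature map (illustrative shape)
y = ParallelDilatedBlock(64, 64)(x)    # receptive field enlarged without downsampling
```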
3.
张兵  杨雪花 《煤炭科技》2020,41(1):35-38
To identify wagon numbers quickly and accurately during railway coal loading, a machine-vision-based wagon number recognition technique is proposed. Connected-component extraction is combined with projection-based segmentation to achieve coarse localization and fine segmentation of the wagon number, and broken characters in the image are segmented a second time. A BP neural network classification model is then built to recognize the numbers, improving the efficiency and accuracy of coal loading.
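A minimal sketch of vertical-projection character segmentation, one step of the pipeline described above, is shown below; it assumes a binarized image and illustrative thresholds, and is not the authors' implementation.

```python
# Sketch (assumed) of projection-based character segmentation on a binarized wagon-number image.
import numpy as np

def projection_segment(binary_img, min_width=2, thresh=1):
    # binary_img: 2-D array with foreground (character) pixels == 1
    column_sum = binary_img.sum(axis=0)          # vertical projection profile
    is_char = column_sum >= thresh               # columns that contain character pixels
    segments, start = [], None
    for x, flag in enumerate(is_char):
        if flag and start is None:
            start = x                            # a character run begins
        elif not flag and start is not None:
            if x - start >= min_width:           # ignore very narrow noise runs
                segments.append((start, x))
            start = None
    if start is not None:
        segments.append((start, len(is_char)))
    return segments                              # list of (start_col, end_col) per character
```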
4.
Deep learning has gained significant popularity in recent years thanks to its tremendous success across a wide range of application fields, medical image analysis in particular. Although convolutional neural network (CNN) based medical applications provide powerful solutions and are revolutionizing medicine, efficiently training CNN models is a tedious and challenging task. It is a computationally intensive process that takes a long time and scarce system resources, which represents a significant hindrance to scientific research progress. To address this challenge, we propose R2D2, a scalable and intuitive deep learning toolkit for medical imaging semantic segmentation. To the best of our knowledge, the present work is the first that aims to tackle this issue by offering novel distributed versions of two well-known and widely used CNN segmentation architectures, namely the fully convolutional network (FCN) and U-Net. We introduce the design and the core building blocks of R2D2. We further present and analyze its experimental evaluation results on two different concrete medical imaging segmentation use cases. R2D2 achieves up to 17.5× and 10.4× speedups over single-node training of U-Net and FCN, respectively, with a negligible, though still unexpected, loss in segmentation accuracy. R2D2 not only provides empirical evidence and an in-depth investigation of the latest published works, but also facilitates and significantly reduces the effort required by researchers to quickly prototype and easily discover cutting-edge CNN configurations and architectures.
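The sketch below shows the general data-parallel training pattern behind such multi-node speedups: each worker holds a model replica and gradients are averaged after every backward pass. It is an assumed PyTorch DistributedDataParallel example with a toy stand-in model and synthetic data, not R2D2's actual API, and it expects to be launched with a multi-process launcher such as torchrun.

```python
# Sketch (assumed; not R2D2's API) of data-parallel training of a segmentation CNN.
import torch
import torch.distributed as dist
import torch.nn as nn

def run(local_rank: int):
    dist.init_process_group(backend="nccl")          # one process per GPU
    torch.cuda.set_device(local_rank)

    model = nn.Sequential(                           # toy stand-in for FCN / U-Net
        nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
        nn.Conv2d(16, 2, 1),
    ).cuda()
    model = nn.parallel.DistributedDataParallel(model, device_ids=[local_rank])

    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
    loss_fn = nn.CrossEntropyLoss()

    for _ in range(10):                              # toy loop with synthetic data
        images = torch.randn(4, 1, 64, 64, device="cuda")
        labels = torch.randint(0, 2, (4, 64, 64), device="cuda")
        optimizer.zero_grad()
        loss_fn(model(images), labels).backward()    # gradients are synchronized across workers here
        optimizer.step()
```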
5.
Objective: In research on vision-guided automatic picking by industrial robots, one of the key technical difficulties is identifying the region where the robot should grasp the target. For metal parts in particular, unstructured factors such as surface reflections and mutual occlusion when parts are placed randomly pose great challenges for grasp-region recognition. This paper therefore proposes a grasp-region recognition method that combines deep learning with a support vector machine. Method: Histogram of oriented gradients (HOG) and local binary pattern (LBP) features of the grasp region are extracted separately, the fused features are reduced in dimensionality with principal component analysis (PCA), and the result is used to train a support vector machine (SVM) classifier. A Mask R-CNN (regions with convolutional neural network) is trained to perform a preliminary segmentation of the grasp regions, and the SVM then re-classifies the regions identified by Mask R-CNN to remove interference regions. Finally, the mask is computed to complete instance segmentation and thereby identify the grasp region precisely. Results: For randomly placed copper parts, the proposed algorithm was compared with Mask R-CNN alone and with a multi-feature-fusion SVM in terms of recognition accuracy, false detection rate, and missed detection rate. The results show that the proposed algorithm improves recognition accuracy by 7% and 25% over Mask R-CNN and SVM, respectively, while effectively reducing false and missed detections. Conclusion: By combining Mask R-CNN and SVM, the proposed algorithm is robust to reflections and occlusion and effectively improves target recognition accuracy.
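A minimal sketch of the second-stage classifier is given below: HOG and LBP features of candidate grasp regions are concatenated, reduced with PCA, and fed to an SVM that filters the regions proposed by Mask R-CNN. It assumes scikit-image and scikit-learn, and the patch size, PCA dimensionality, and kernel are illustrative choices.

```python
# Sketch (assumed) of the HOG + LBP + PCA + SVM filter for Mask R-CNN proposals.
import numpy as np
from skimage.feature import hog, local_binary_pattern
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

def region_features(gray_patch):
    h = hog(gray_patch, pixels_per_cell=(8, 8), cells_per_block=(2, 2))
    lbp = local_binary_pattern(gray_patch, P=8, R=1, method="uniform")
    lbp_hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)
    return np.concatenate([h, lbp_hist])           # fused HOG + LBP descriptor

# patches: list of equal-sized grayscale crops from Mask R-CNN proposals
# labels:  1 = graspable region, 0 = interference region
def train_filter(patches, labels, n_components=64):
    X = np.stack([region_features(p) for p in patches])
    clf = make_pipeline(PCA(n_components=n_components), SVC(kernel="rbf"))
    return clf.fit(X, labels)
```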
6.
The segmentation of specific tissues in an MR brain image for quantitative analysis can assist disease diagnosis and medical research, so a robust and accurate method for automatic segmentation is necessary. Atlas-based methods are a common and effective approach to automatic segmentation, where an atlas refers to a pair of images consisting of an intensity image and its corresponding label image. Unlike general multi-atlas methods, which propagate labels through each single atlas and then fuse them, we propose a hybrid atlas forest based on a confidence-weighted probability matrix that considers the atlas set as a whole and treats each voxel differently. In this framework, we first register each atlas to the image space of the target and calculate the confidence of the voxels in the registered atlas. Then, a confidence-weighted probability matrix is generated and appended to the intensity image of the atlas or the target to provide spatial information about the target tissue. Third, a hybrid atlas forest is trained to gather the features and correlation information among the atlases in the dataset. Finally, the segmentation of the target tissues is predicted by the trained hybrid atlas forest. The segmentation performance and the efficiency of the components of the proposed method are evaluated on two public datasets. Based on the experimental results and quantitative comparisons, our method gathers spatial information and correlations among the atlases to obtain an accurate segmentation.
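The sketch below illustrates the core idea in a highly simplified form: voxel-wise intensities from registered atlases are augmented with a probability channel and used to train a forest that predicts tissue labels. The feature set and forest settings are assumptions for illustration, not the paper's hybrid atlas forest.

```python
# Sketch (assumed) of voxel-wise forest training on intensity + probability features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def voxel_features(intensity, prob_map):
    # intensity, prob_map: 3-D arrays of identical shape (already registered)
    return np.stack([intensity.ravel(), prob_map.ravel()], axis=1)

def train_atlas_forest(atlas_intensities, atlas_prob_maps, atlas_labels):
    X = np.concatenate([voxel_features(i, p)
                        for i, p in zip(atlas_intensities, atlas_prob_maps)])
    y = np.concatenate([l.ravel() for l in atlas_labels])
    forest = RandomForestClassifier(n_estimators=50, n_jobs=-1)
    return forest.fit(X, y)                        # predicts a tissue label per voxel
```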
7.
Objective and quantitative assessment of skin conditions is essential for cosmeceutical studies and for research on skin aging and skin regeneration. Various handcrafted image processing methods have been proposed to evaluate skin conditions objectively, but they have unavoidable disadvantages when used to analyze skin features accurately. This study proposes a hybrid segmentation scheme consisting of Deeplab v3+ with an Inception-ResNet-v2 backbone, LightGBM, and morphological processing (MP) to overcome the shortcomings of handcrafted approaches. First, we apply Deeplab v3+ with an Inception-ResNet-v2 backbone for pixel-level segmentation of skin wrinkles and cells. Then, LightGBM and MP are used to enhance the pixel segmentation quality. Finally, we determine several skin features based on the results of wrinkle and cell segmentation. Our proposed segmentation scheme achieved a mean accuracy of 0.854, a mean intersection over union of 0.749, and a mean boundary F1 score of 0.852, improvements of 1.1%, 6.7%, and 14.8%, respectively, over a panoptic-based semantic segmentation method.
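A minimal sketch of the refinement stage is shown below: the CNN's per-pixel probability is combined with a simple intensity feature, re-scored by an already trained LightGBM classifier, and cleaned with morphological opening and closing. The feature choice and thresholds are assumptions, not the paper's exact pipeline.

```python
# Sketch (assumed) of LightGBM re-scoring followed by morphological post-processing.
import numpy as np
from lightgbm import LGBMClassifier
from scipy import ndimage

def refine(prob_map, gray_img, clf: LGBMClassifier, thresh=0.5):
    # prob_map: CNN per-pixel probability; gray_img: matching grayscale image
    feats = np.stack([prob_map.ravel(), gray_img.ravel()], axis=1)
    refined = clf.predict_proba(feats)[:, 1].reshape(prob_map.shape)
    mask = refined > thresh
    mask = ndimage.binary_opening(mask, iterations=1)   # remove small speckles
    mask = ndimage.binary_closing(mask, iterations=1)   # fill small gaps
    return mask
```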
8.
Least squares regression (LSR) is a common subspace segmentation method; because LSR has a closed-form solution, its clustering performance is high. However, LSR clusters the data with spectral clustering, and spectral clustering initializes its cluster centers randomly, which affects the subsequent clustering result. To address this problem, an improved LSR algorithm (LSR-DC) is proposed that selects cluster centers according to two properties, local density and distance. Experiments on the Extended Yale B dataset show that the algorithm achieves high clustering accuracy, exhibits a degree of robustness, and outperforms existing subspace segmentation methods such as LSR.
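For reference, the LSR self-representation step has the closed-form solution Z = (XᵀX + λI)⁻¹XᵀX, and the resulting affinity matrix is passed to spectral clustering. The sketch below shows that step only; the regularization value is illustrative, and the density-and-distance center selection of LSR-DC is not reproduced here.

```python
# Sketch (assumed) of the closed-form LSR self-representation and affinity matrix.
import numpy as np

def lsr_affinity(X, lam=1e-2):
    # X: (d, n) data matrix with samples as columns; lam: regularization weight
    n = X.shape[1]
    Z = np.linalg.solve(X.T @ X + lam * np.eye(n), X.T @ X)   # (X^T X + lam*I)^-1 X^T X
    return 0.5 * (np.abs(Z) + np.abs(Z.T))                    # symmetric affinity for spectral clustering
```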
9.
The liver has many diverse responsibilities and functions in the digestion of amino acids, carbohydrates, and lipids, as well as in protein synthesis from consumed food. Liver disease may disturb the hormonal and nutritional balance of the human body, and earlier diagnosis of such critical conditions may help to treat patients effectively. In this paper, a computationally efficient AW-HARIS algorithm is used to perform automated segmentation of CT scan images to identify abnormalities in the human liver. The proposed approach can recognize abnormalities with good accuracy and without training, unlike supervised procedures that require considerable computational effort for training. In the earlier stages, the CT images are pre-processed with an Adaptive Multiscale Data Condensation kernel to normalize the underlying noise and enhance the image's contrast for better segmentation. The outcome of this preliminary phase is then fed into the Anisotropic Weighted Heuristic Algorithm for Real-time Image Segmentation (AW-HARIS), which uses texture-related information and yields precise results with acceptable computational latency compared to its counterparts. It is observed that the proposed approach outperforms its counterparts in the majority of cases, with an accuracy of 78%. This smart diagnosis approach would help medical staff accurately predict abnormalities and disease progression in earlier stages of the ailment.
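The sketch below is a generic, training-free CT segmentation pipeline in the same spirit (contrast enhancement followed by clustering on intensity and local texture); it is explicitly not the AW-HARIS algorithm, and the filter size and number of regions are illustrative assumptions.

```python
# Sketch (assumed; NOT AW-HARIS) of an unsupervised, training-free CT slice segmentation.
import numpy as np
from scipy import ndimage
from skimage import exposure
from sklearn.cluster import KMeans

def segment_ct_slice(ct_slice, n_regions=3, win=9):
    img = exposure.equalize_adapthist(ct_slice)              # normalize noise/contrast
    local_mean = ndimage.uniform_filter(img, size=win)
    local_sq_mean = ndimage.uniform_filter(img ** 2, size=win)
    texture = np.sqrt(np.maximum(local_sq_mean - local_mean ** 2, 0.0))  # local std as texture cue
    feats = np.stack([img.ravel(), texture.ravel()], axis=1)
    labels = KMeans(n_clusters=n_regions, n_init=10).fit_predict(feats)
    return labels.reshape(ct_slice.shape)                    # region label per pixel
```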
10.
In this paper, a resubmitted-sampling-based control chart using successive sampling over two successive occasions is proposed to monitor an underlying characteristic of interest. Auxiliary information from the first occasion is used to monitor the relative change in the study variable on the second occasion, in the presence of a high degree of correlation between occasions. The structural and operational design is presented along with a comparative performance evaluation. The average run length is used as the performance measure, and the results support the proposed approach in comparison with other control charts that use auxiliary data. The implementation is illustrated with two real examples.
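For context, the average run length (ARL) used as the performance measure above is the expected number of samples plotted before a chart signals. The sketch below estimates it by Monte Carlo simulation for a simple Shewhart-type chart with normal observations; it is a generic illustration, not the proposed chart.

```python
# Sketch (assumed) of Monte Carlo estimation of the average run length of a 3-sigma chart.
import numpy as np

def simulate_arl(control_limit=3.0, shift=0.0, n_runs=5000, rng=None):
    if rng is None:
        rng = np.random.default_rng(0)
    run_lengths = []
    for _ in range(n_runs):
        t = 0
        while True:
            t += 1
            stat = rng.normal(loc=shift, scale=1.0)     # plotted statistic at time t
            if abs(stat) > control_limit:               # out-of-control signal
                run_lengths.append(t)
                break
    return float(np.mean(run_lengths))

print(simulate_arl())          # roughly 370 for an in-control standard 3-sigma chart
```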